AI can predict political orientations from blank faces – and researchers fear 'serious' privacy challenges

FOX News

Researchers are warning that facial recognition technologies are "more threatening than previously thought" and pose "serious challenges to privacy" after a study found that artificial intelligence can successfully predict a person's political orientation from images of expressionless faces. The study, published in the journal American Psychologist, says the algorithm's ability to accurately guess one's political views is "on par with how well job interviews predict job success, or alcohol drives aggressiveness." Lead author Michal Kosinski told Fox News Digital that 591 participants filled out a political orientation questionnaire before the AI captured what he described as a numerical "fingerprint" of their faces and compared it against a database of their responses to predict their views.


Human-Centered Privacy Research in the Age of Large Language Models

Li, Tianshi, Das, Sauvik, Lee, Hao-Ping, Wang, Dakuo, Yao, Bingsheng, Zhang, Zhiping

arXiv.org Artificial Intelligence

The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns. To date, research on these concerns has been model-centered: exploring how LLMs lead to privacy risks such as memorization, or how they can be used to infer personal characteristics about people from their content. We argue that there is a need for more research focusing on the human aspect of these privacy issues: e.g., research on how design paradigms for LLMs affect users' disclosure behaviors, users' mental models and preferences for privacy controls, and the design of tools, systems, and artifacts that empower end-users to reclaim ownership over their personal data. Because these models have imperfect privacy properties, our goal is to initiate discussions that outline an agenda for human-centered research on privacy issues in LLM-powered systems, toward building usable, efficient, and privacy-friendly systems on top of them. This Special Interest Group (SIG) aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, or any other related domain to share their perspectives and experiences on this problem, and to help our community establish a collective understanding of the challenges, research opportunities, research methods, and strategies for collaborating with researchers outside of HCI.


Security and Privacy Issues of Federated Learning

Hasan, Jahid

arXiv.org Artificial Intelligence

Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns by allowing multiple participants to construct a shared model without centralizing sensitive data. However, this decentralized paradigm introduces new security challenges, necessitating a systematic identification and classification of potential risks to ensure FL's security guarantees. This paper presents a comprehensive taxonomy of security and privacy challenges in FL across various machine learning models, including large language models. We specifically categorize attacks performed by the aggregator and by participants, focusing on poisoning attacks, backdoor attacks, membership inference attacks, generative adversarial network (GAN) based attacks, and differential privacy attacks. Additionally, we propose new directions for future research, seeking innovative solutions to fortify FL systems against emerging security risks and uphold sensitive data confidentiality in distributed learning environments.
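To make the aggregator/participant roles discussed above concrete, here is a minimal sketch of a federated-averaging round in plain Python. This is a toy one-parameter model, not the paper's setup; all names (`local_train`, `federated_round`) are illustrative.

```python
import random

def local_train(w, data, lr=0.05, steps=5):
    """A few batch gradient steps on squared loss for a toy 1-D linear
    model y = w * x, run on one participant's private data."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, participants):
    """One FedAvg-style round: every participant trains locally from the
    shared weight, and the aggregator averages the returned weights.
    Raw data never leaves a participant."""
    local = [local_train(global_w, data) for data in participants]
    return sum(local) / len(local)

# Three participants, each holding private samples of y = 2x + noise.
random.seed(0)
participants = [[(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 5)]
                for _ in range(3)]
w = 0.0
for _ in range(10):
    w = federated_round(w, participants)
# w ends up close to the true slope of 2 without any raw data being shared.
```

The averaging step is also where several of the attacks surveyed above bite: a poisoning participant can return a crafted weight that skews the average, and a curious aggregator can inspect individual updates before averaging them.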


Remote Data Science Part 2: Introduction to PySyft and PyGrid

#artificialintelligence

This post is a continuation of "Remote Data Science Part 1: Today's privacy challenges in BigData". The previous post discussed why privacy challenges in BigData matter and explained how "Remote Data Science" provides three privacy guarantees for the data scientist and the data owner. This post explains the different components of Remote Data Science and distinguishes "model-centric FL" from "data-centric FL"; both are deployable in a Remote Data Science architecture. PyGrid is a peer-to-peer network of data curators/owners and data scientists who can collectively train AI models using PySyft on decentralised data: the data never leaves the device.
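The core pattern behind remote data science can be sketched in a few lines of plain Python: the owner holds the data and exposes only approved aggregate queries, so the data scientist never sees raw records. This is illustrative only; PySyft's and PyGrid's actual APIs are richer and evolve quickly, and the `DataOwner` class here is a made-up stand-in.

```python
import statistics

class DataOwner:
    """Toy stand-in for a PyGrid node: holds private records and answers
    only pre-approved aggregate queries, never returning raw rows."""
    def __init__(self, records):
        self._records = records                # private: data stays here
        self._approved = {"mean": statistics.mean,
                          "count": len}        # owner-sanctioned queries

    def query(self, name):
        if name not in self._approved:
            raise PermissionError(f"query '{name}' not approved by owner")
        return self._approved[name](self._records)

# The data scientist works only through the query interface.
owner = DataOwner([3.2, 4.1, 5.0, 3.8])
result = owner.query("mean")   # aggregate result, not the underlying records
```

Real systems layer more on top of this (audit trails, differential-privacy budgets, request approval workflows), but the invariant is the same: computation travels to the data, not the other way around.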


Building Privacy Into AI: Is the Future Federated?

#artificialintelligence

The changing dynamics of the digital world have led to several privacy challenges for businesses, large and small, placing increasing pressure on them to evolve their processes and strategies. Much of the burden stems from the sheer volume of data present today; in fact, the volume of data is predicted to balloon to 175 zettabytes (ZB) by 2025. Today, it is simply beyond human capability to process this data and protect privacy effectively without the assistance of privacy-enhancing technologies (PETs). This has led to an explosion of adaptive machine learning (ML) algorithms that can wade through the mountain of data while continuously and efficiently adjusting their behavior in real time as new data streams are fed into them.
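As a toy illustration of the streaming idea above (processing each record once and retaining only an aggregate, so raw data never piles up), consider an incremental running mean. The class name is invented for this sketch; real adaptive ML systems update model parameters rather than a single statistic, but the per-record update pattern is the same.

```python
class StreamingMean:
    """Update a running statistic per record, then discard the record:
    the stream is processed in real time and raw data is never retained."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean update
        return self.mean

s = StreamingMean()
for x in [10, 20, 30]:
    s.update(x)
# s.mean is now 20.0, computed without ever storing the stream.
```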


Could Synthetic Data Be the Future of Data Sharing? - CPO Magazine

#artificialintelligence

Synthetic data generation (SDG) is rapidly emerging as a practical privacy enhancing technology (PET) for sharing data for secondary purposes. It does so by generating non-identifiable datasets that can be used and disclosed without the legislative need for additional consent, given that such datasets would not be considered personal information. After more than 15 years of work in the privacy and data anonymization space, the limitations of traditional de-identification methods have become increasingly evident. This creates room for modern PETs that can enable the responsible processing of data for secondary purposes. There's a growing appetite from CPOs to understand where SDG fits as a PET, how it's generated, what problems it can solve, and how laws and regulations apply. In a nutshell, synthetic data is generated from real data.
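To make "generated from real data" concrete, here is a deliberately naive sketch: fit a Gaussian to each numeric column of the real data, then sample fresh rows from those distributions. Production SDG tools model joint structure (correlations between columns) and add privacy guarantees; this toy version, with invented names and made-up example numbers, preserves only per-column marginals.

```python
import random
import statistics

def synthesize(real_rows, n):
    """Naive column-wise Gaussian synthesizer: learn the mean and stdev
    of each numeric column from the real data, then sample n new rows.
    The synthetic rows are drawn from the model, not copied from anyone."""
    cols = list(zip(*real_rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return [tuple(random.gauss(m, s) for m, s in params) for _ in range(n)]

random.seed(1)
real = [(54, 120), (61, 135), (47, 110), (58, 128)]   # e.g. age, weight
fake = synthesize(real, 100)
```

Even this crude generator shows the core trade-off: the synthetic rows match the real data's summary statistics, yet no row corresponds to a real individual, which is what takes the output out of scope for most personal-information rules.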


China's Privacy Challenges with AI and Mobile Apps

#artificialintelligence

China's rapidly growing tech economy is now facing some serious questions about the trade-offs involved in the widespread adoption of emerging technologies such as AI. In fact, China's Ministry of Science and Technology is now leading the debate over the relative benefits and drawbacks of artificial intelligence, with at least some recognition that certain AI applications – such as facial recognition technology – might have some very negative implications for personal privacy. At the same time, other regulatory authorities within China – including the Cyberspace Administration of China – are now taking a closer look at how popular consumer technologies (including mobile apps) might also be going too far when it comes to collecting, using and sharing user data. For now, the most high-profile emerging technology within China is artificial intelligence (AI), which is being embraced much more quickly and widely than in the West. For example, Chinese law enforcement authorities are using AI-powered facial recognition technologies to crack down on crime and terrorism, while urban planners and other policymakers are embracing AI as a way to come up with more efficient healthcare, education and transportation solutions.


Artificial Intelligence and the Privacy Challenge

#artificialintelligence

Proponents of artificial intelligence (AI) hail the advances in the ability of machines to make independent decisions based on an analysis of the environment as the next step in machine intelligence, and claim that it will revolutionize complex problem solving across a wide spectrum of human endeavor. The simplest definition of AI is that of an 'intelligent' machine that exhibits all the attributes of a flexible, rational agent that perceives its environment and makes decisions, and in many instances takes actions that maximize the chances of success when engaged in a particular task. If one looks at a popular definition, artificial intelligence machines mimic human cognitive function: they can learn and solve problems. One of the oldest and most widely accepted tests of whether a machine exhibits true AI is the Turing Test. A machine can pass the 65-year-old Turing Test if the computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations.




When it comes to machine learning, is privacy possible? #VentureCanvas Tech Talk

#artificialintelligence

We're seeing the advance of AI everywhere we look--from ordinary applications like our voice-operated virtual assistants, to extraordinary innovations that are changing how we optimize operations, predict behaviour, and even diagnose disease. This presents an unprecedented opportunity for Canada to become a global leader in technology. Canada already produces some of the most sought-after AI talent in the world and has committed upwards of $1 billion in investment. However, along with opportunity, AI comes with many risks. Among the major concerns is how AI technologies will require new ways of thinking about privacy and policy.